252 research outputs found

    Moduli Spaces of Embedded Constant Mean Curvature Surfaces with Few Ends and Special Symmetry

    We give necessary conditions on complete embedded constant mean curvature (CMC) surfaces with three or four ends subject to reflection symmetries. The respective submoduli spaces are two-dimensional varieties in the moduli spaces of general CMC surfaces. We characterize fundamental domains of our CMC surfaces by associated great circle polygons in the three-sphere.
    Comment: LaTeX2e, AMS-LaTeX, 24 pages

    Making Decisions that Reduce Discriminatory Impacts

    As machine learning algorithms move into real-world settings, it is crucial to ensure they are aligned with societal values. There has been much work on one aspect of this, namely the discriminatory prediction problem: How can we reduce discrimination in the predictions themselves? While an important question, solutions to this problem only apply in a restricted setting, as we have full control over the predictions. Often we care about the non-discrimination of quantities we do not have full control over. Thus, we describe another key aspect of this challenge, the discriminatory impact problem: How can we reduce discrimination arising from the real-world impact of decisions? To address this, we describe causal methods that model the relevant parts of the real-world system in which the decisions are made. Unlike previous approaches, these models not only allow us to map the causal pathway of a single decision, but also to model the effect of interference: how the impact on an individual depends on decisions made about other people. Often, the goal of decision policies is to maximize a beneficial impact overall. To reduce the discrimination of these benefits, we devise a constraint inspired by recent work in counterfactual fairness (Kusner et al., 2017), and give an efficient procedure to solve the constrained optimization problem. We demonstrate our approach with an example: how to increase the number of students taking college entrance exams in New York City public schools.
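    The abstract above describes allocating a beneficial intervention under a fairness constraint on the resulting benefits. The following is a minimal illustrative sketch of that idea, not the paper's actual procedure: it greedily assigns a fixed budget of interventions to maximize estimated benefit while keeping per-group service rates balanced (a crude stand-in for the counterfactual-fairness constraint). All names (`benefit`, `group`, `budget`) are hypothetical.

```python
def fair_allocation(benefit, group, budget):
    """Greedy toy allocator: repeatedly give the intervention to the
    highest-benefit individual in whichever group is currently least
    served, so benefits do not concentrate in one group."""
    n = len(benefit)
    chosen = set()
    groups = set(group)
    counts = {g: 0 for g in groups}          # interventions given per group
    sizes = {g: group.count(g) for g in groups}
    for _ in range(budget):
        # Pick the group with the lowest current service rate.
        g_min = min(groups, key=lambda g: counts[g] / sizes[g])
        candidates = [i for i in range(n)
                      if i not in chosen and group[i] == g_min]
        if not candidates:                   # group exhausted: fall back
            candidates = [i for i in range(n) if i not in chosen]
        best = max(candidates, key=lambda i: benefit[i])
        chosen.add(best)
        counts[group[best]] += 1
    return chosen
```

    With two individuals per group and a budget of two, this picks the highest-benefit member of each group rather than the two globally highest, illustrating the benefit-parity trade-off.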

    Causal Reasoning for Algorithmic Fairness

    In this work, we argue for the importance of causal reasoning in creating fair algorithms for decision making. We give a review of existing approaches to fairness, describe work in causality necessary for the understanding of causal approaches, argue why causality is necessary for any approach that wishes to be fair, and give a detailed analysis of the many recent approaches to causality-based fairness.

    Operationalizing Complex Causes: A Pragmatic View of Mediation

    We examine the problem of causal response estimation for complex objects (e.g., text, images, genomics). In this setting, classical \emph{atomic} interventions are often not available (e.g., changes to characters, pixels, DNA base-pairs). Instead, we only have access to indirect or \emph{crude} interventions (e.g., enrolling in a writing program, modifying a scene, applying a gene therapy). In this work, we formalize this problem and provide an initial solution. Given a collection of candidate mediators, we propose (a) a two-step method for predicting the causal responses of crude interventions; and (b) a testing procedure to identify mediators of crude interventions. We demonstrate, on a range of simulated and real-world-inspired examples, that our approach allows us to efficiently estimate the effect of crude interventions with limited data from new treatment regimes.
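    The two-step idea in the abstract can be sketched in a toy scalar setting, with the caveat that this is an assumed simplification, not the paper's method: step one learns how a crude intervention T shifts a candidate mediator M, step two learns how M drives the outcome Y, and the two fits are composed to predict the response to an intervention. Function and variable names are illustrative.

```python
def fit_two_step(T, M, Y):
    """Toy two-step estimator for a crude intervention acting through a
    single scalar mediator. Returns a function t -> predicted outcome."""
    # Step 1: mediator response E[M | T = t], estimated by group averages.
    m_given_t = {}
    for t in set(T):
        vals = [m for ti, m in zip(T, M) if ti == t]
        m_given_t[t] = sum(vals) / len(vals)
    # Step 2: outcome model Y = a + b*M via closed-form least squares.
    n = len(M)
    m_bar, y_bar = sum(M) / n, sum(Y) / n
    b = (sum((m - m_bar) * (y - y_bar) for m, y in zip(M, Y))
         / sum((m - m_bar) ** 2 for m in M))
    a = y_bar - b * m_bar
    # Compose the two fits: predicted response to crude intervention t.
    return lambda t: a + b * m_given_t[t]
```

    On data where the intervention shifts the mediator from 1 to 3 and the outcome is twice the mediator, the composed predictor recovers the intervention responses exactly.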

    The Sensitivity of Counterfactual Fairness to Unmeasured Confounding

    Causal approaches to fairness have seen substantial recent interest, both from the machine learning community and from wider parties interested in ethical prediction algorithms. In no small part, this has been due to the fact that causal models allow one to simultaneously leverage data and expert knowledge to remove discriminatory effects from predictions. However, one of the primary assumptions in causal modeling is that the causal graph is known. This introduces a new opportunity for bias, caused by misspecifying the causal model. One common way for misspecification to occur is via unmeasured confounding: the true causal effect between variables is partially described by unobserved quantities. In this work we design tools to assess the sensitivity of fairness measures to this confounding for the popular class of non-linear additive noise models (ANMs). Specifically, we give a procedure for computing the maximum difference between two counterfactually fair predictors, where one has become biased due to confounding. For the case of bivariate confounding our technique can be swiftly computed via a sequence of closed-form updates. For multivariate confounding we give an algorithm that can be efficiently solved via automatic differentiation. We demonstrate our new sensitivity analysis tools in real-world fairness scenarios to assess the bias arising from confounding.
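    The core quantity above, the maximum difference between a fair predictor and one biased by confounding, can be illustrated with a very loose grid-search stand-in (the paper uses closed-form updates and automatic differentiation instead). Everything here, including the interface, is a hypothetical sketch.

```python
def max_predictor_gap(predict_fair, predict_confounded, xs, c_grid):
    """Worst-case disagreement between a predictor fit assuming no
    confounding and one fit under hypothesised confounding strength c,
    scanned over inputs xs and a grid of plausible strengths c_grid."""
    worst = 0.0
    for c in c_grid:
        for x in xs:
            gap = abs(predict_fair(x) - predict_confounded(x, c))
            worst = max(worst, gap)
    return worst
```

    For instance, if confounding perturbs a linear predictor's slope by at most 0.5, the worst-case gap over inputs in [1, 2] is 0.5 * 2 = 1.0.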

    Causal Effect Inference for Structured Treatments

    We address the estimation of conditional average treatment effects (CATEs) for structured treatments (e.g., graphs, images, texts). Given a weak condition on the effect, we propose the generalized Robinson decomposition, which (i) isolates the causal estimand (reducing regularization bias), (ii) allows one to plug in arbitrary models for learning, and (iii) possesses a quasi-oracle convergence guarantee under mild assumptions. In experiments with small-world and molecular graphs we demonstrate that our approach outperforms prior work in CATE estimation.
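    For context, the classical scalar-treatment Robinson decomposition that this work generalizes can be sketched as follows: residualize the outcome and the treatment on covariates using nuisance estimates, then regress outcome residuals on treatment residuals to isolate the effect. This is the textbook construction, not the paper's structured-treatment version; the names are illustrative.

```python
def robinson_effect(X, T, Y, m_hat, e_hat):
    """Residual-on-residual estimate of a constant treatment effect tau.
    m_hat(x) approximates E[Y | X = x]; e_hat(x) approximates E[T | X = x]
    (nuisance models, assumed fitted elsewhere)."""
    ry = [y - m_hat(x) for x, y in zip(X, Y)]   # outcome residuals
    rt = [t - e_hat(x) for x, t in zip(X, T)]   # treatment residuals
    return (sum(a * b for a, b in zip(ry, rt))
            / sum(b * b for b in rt))
```

    With correctly specified nuisances, the residual regression recovers the true effect even though the raw outcome also depends on the covariates, which is the "isolates the causal estimand" property the abstract refers to.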

    Première valeur propre du laplacien, volume conforme et chirurgies

    We define a new differential invariant of a compact manifold $M$ by $V_{\mathcal M}(M)=\inf_g V_c(M,[g])$, where $V_c(M,[g])$ is the conformal volume of $M$ for the conformal class $[g]$, and prove that it is uniformly bounded above. The main motivation is that this bound provides an upper bound for the Friedlander-Nadirashvili invariant defined by $\inf_g\sup_{\tilde g\in[g]}\lambda_1(M,\tilde g)\,\mathrm{Vol}(M,\tilde g)^{\frac{2}{n}}$. The proof relies on the study of the behaviour of $V_{\mathcal M}(M)$ when one performs surgeries on $M$.
    Comment: 11 pages, 5 figures, in French